
There is growing talk of an “AI confidence gap” — often framed as a question of age or familiarity with tools.
I do not think it’s about either.
I think it’s about how we relate to assistance, judgment and purpose at work.
Earlier in my career, I was fortunate to work with outstanding personal assistants.
They did not do my job — they amplified it.
They protected focus.
They improved quality.
They applied judgment around priorities.
When many of those roles were removed and replaced with early digital tools, efficiency did not increase; responsibility fragmented.
We became operators rather than decision-makers.
Those with strong discipline survived; others became busier, but not better.
What feels different with today’s AI tools is this:
Used well, they can restore that assistant relationship, but only if purpose comes first.
The most effective pattern I have seen (and use myself) is simple:
• Prepare your thinking first
• Define the scenario and intent clearly
• Use AI to challenge, refine, or improve — not to replace judgment
• Re-insert your voice, values and accountability
• Avoid overuse — returns diminish quickly
In short: use AI as an assistant, not a surrogate self.
From a career-development perspective, this matters because AI consistently rewards people who bring:
• clarity of intent
• critical thinking
• contextual judgment
• communication and synthesis
AI does not replace these skills — it exposes their absence.
I believe those who can direct intelligent tools with purpose will compound quality, efficiency, and learning.
Those who outsource their thinking to them will likely plateau — or worse.
So perhaps confidence with AI at work is not about knowing prompts.
It’s about knowing what you are trying to achieve, why it matters, and how to apply judgment with intent.
That, to me, is the real confidence gap.
